HBM memory Flash News List | Blockchain.News

List of Flash News about HBM memory

2025-10-15 03:23
Sam Altman flags leap since NVIDIA DGX-1: AI hardware boom fuels NVDA demand, HBM supply tightness, and on-chain GPU plays like RNDR

According to @sama, the nine years of progress since NVIDIA delivered the DGX-1 underscore the step-change in AI compute now powering large-scale model training, a key backdrop for AI hardware and related assets. source: @sama on X, Oct 15, 2025

NVIDIA launched the DGX-1 in 2016 with eight Tesla P100 GPUs delivering up to 170 FP16 teraflops, establishing an early deep learning benchmark. source: NVIDIA blog, April 2016

By 2023, NVIDIA had unveiled DGX GH200 systems interconnecting up to 256 Grace Hopper superchips with shared memory for trillion-parameter-scale workloads, an orders-of-magnitude gain over the DGX-1. source: NVIDIA press release, May 2023

Demand for H100- and GH200-class accelerators has surged among cloud providers, driving NVDA data center momentum and extending lead times. source: NVIDIA investor relations, August 2024

HBM supply remains tight as SK hynix and Micron expand HBM3/HBM3E output for AI accelerators, a constraint that can affect server delivery schedules and near-term build rates. source: SK hynix Q2 2024 results release; Micron HBM3E announcement, 2024

For crypto exposure to AI compute, Render Network uses the RNDR token to pay for distributed GPU rendering and AI inference on its marketplace, directly linking token utility to GPU availability. source: Render Network documentation, 2024

Crypto-aligned data center operators have also begun allocating capacity to AI workloads, exemplified by Core Scientific's multi-year AI hosting agreement and Bit Digital's AI compute business line, connecting crypto infrastructure to the AI capex cycle. source: Core Scientific press release, June 2024; Bit Digital corporate update, 2024
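The DGX-1 figure cited above is an aggregate across the system's eight GPUs. As a minimal back-of-the-envelope sketch (the per-GPU breakdown is our arithmetic, not a figure from the article), the implied per-GPU FP16 throughput works out as follows:

```python
# Back-of-the-envelope: derive per-GPU FP16 throughput from the
# DGX-1 aggregate figure cited in NVIDIA's 2016 launch materials.

DGX1_AGGREGATE_FP16_TFLOPS = 170  # "up to 170 FP16 teraflops" for the system
DGX1_GPU_COUNT = 8                # eight Tesla P100 GPUs

per_gpu_tflops = DGX1_AGGREGATE_FP16_TFLOPS / DGX1_GPU_COUNT
print(f"~{per_gpu_tflops:.1f} FP16 TFLOPS per P100")
```

This is simply aggregate divided by GPU count (about 21 TFLOPS per P100); it ignores interconnect and memory-bandwidth effects, which is why later shared-memory systems like the DGX GH200 are compared at the system level rather than per chip.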
